To address the low spatial resolution of remote sensing fusion images produced by the Contourlet transform, a remote sensing image fusion algorithm based on the Modified Contourlet Transform (MCT) was proposed. Firstly, the multi-spectral image was decomposed into intensity, hue and saturation components by the Intensity-Hue-Saturation (IHS) transform. Secondly, Modified Contourlet decomposition was applied to the intensity component and the histogram-matched panchromatic image to obtain low-pass and high-pass subband coefficients. Then the low-pass subband coefficients were fused by averaging, and the high-pass subband coefficients were merged by the Novel Sum-Modified-Laplacian (NSML). Finally, the fusion result was taken as the new intensity component of the multi-spectral image, and the remote sensing fusion image was obtained by the inverse IHS transform. Compared with the fusion algorithms based on Principal Component Analysis (PCA) and Shearlet, on PCA and wavelet, and on the NonSubsampled Contourlet Transform (NSCT), the average gradient, used to evaluate image sharpness, of the proposed method increased by 7.3%, 6.9% and 3.9% respectively. The experimental results show that the proposed method enhances the frequency localization of the Contourlet transform and the utilization of decomposition coefficients, and effectively improves the spatial resolution of the remote sensing fusion image while preserving multi-spectral information.
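The IHS pan-sharpening pipeline above can be sketched in NumPy. This is a minimal illustration under stated simplifications, not the paper's MCT method: the Contourlet decomposition and NSML fusion rule are replaced by a plain average of the intensity and the matched panchromatic band, histogram matching is approximated by mean/std matching, and the inverse IHS step uses the common additive shortcut.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Simplified IHS pan-sharpening sketch (placeholder for the MCT pipeline).

    ms  : (H, W, 3) multispectral image, float values
    pan : (H, W) panchromatic image, float values
    """
    intensity = ms.mean(axis=2)                  # I component of the IHS transform
    # cheap stand-in for histogram matching: match mean and std of pan to I
    pan_m = ((pan - pan.mean()) / (pan.std() + 1e-12)
             * intensity.std() + intensity.mean())
    # the paper fuses low-pass coefficients by averaging and high-pass by NSML;
    # here the whole fused intensity is a plain average as a placeholder
    fused_i = 0.5 * (intensity + pan_m)
    # inverse IHS via the additive shortcut: add the intensity change to each band
    return ms + (fused_i - intensity)[..., None]
```

Because the hue and saturation components are untouched, only the intensity of the multispectral image changes, which is exactly why IHS-family methods preserve spectral information while sharpening spatial detail.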
To improve the accuracy of focused-region detection in multifocus image fusion based on multiscale transforms, a multifocus image fusion algorithm based on the NonSubsampled Shearlet Transform (NSST) and focused-region detection was proposed. Firstly, an initial fused image was obtained by an NSST-based fusion algorithm. Secondly, the initial focused regions were identified by comparing the initial fused image with the source multifocus images. Then morphological opening and closing were used to correct the initial focused regions. Finally, the fused image was produced by an Improved Pulse Coupled Neural Network (IPCNN) within the corrected focused regions. The experimental results show that, compared with classic fusion algorithms based on wavelet or Shearlet and with popular algorithms combining NSST and a Pulse Coupled Neural Network (PCNN), the proposed method clearly improves objective evaluation criteria including Mutual Information (MI), spatial frequency and transferred edge information. This indicates that the proposed method identifies the focused regions of the source images more accurately and transfers sharper source information into the fused image.
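The detect-then-correct idea above can be sketched with NumPy alone. This is an assumption-laden toy, not the paper's method: Laplacian energy stands in for NSST-based sharpness comparison, the IPCNN fusion step is replaced by direct mask selection, morphology uses a 4-neighbour cross structuring element, and `np.roll` wraps at borders.

```python
import numpy as np

def laplacian_energy(img):
    # discrete Laplacian magnitude as a simple focus/sharpness measure
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def dilate(mask):
    # binary dilation with a 4-neighbour cross (wrap-around borders)
    out = mask.copy()
    for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
        out |= np.roll(mask, sh, ax)
    return out

def erode(mask):
    # binary erosion with a 4-neighbour cross (wrap-around borders)
    out = mask.copy()
    for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
        out &= np.roll(mask, sh, ax)
    return out

def focused_mask(img_a, img_b):
    """True where img_a looks sharper; cleaned by opening then closing."""
    raw = laplacian_energy(img_a) > laplacian_energy(img_b)
    opened = dilate(erode(raw))    # opening removes isolated false detections
    return erode(dilate(opened))   # closing fills small holes in the regions

def fuse(img_a, img_b):
    # pick each pixel from whichever source is in focus there
    return np.where(focused_mask(img_a, img_b), img_a, img_b)
```

The opening-then-closing pair mirrors the correction step in the abstract: opening deletes scattered misclassified pixels, closing repairs small gaps inside genuinely focused regions.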
To deal with the single-image scale-up problem, a super-resolution reconstruction algorithm based on dictionary learning and non-local similarity was proposed. Difference images between the high-resolution images and the results of iterative back-projection reconstruction were first computed; the high-resolution dictionary and the corresponding low-resolution dictionary were then jointly trained on the difference-image patches and the corresponding low-resolution image patches using the K-Singular Value Decomposition (K-SVD) algorithm, following the idea of co-training the high and low dictionaries for super-resolution reconstruction. In addition, a non-local similarity regularization constraint was introduced to further improve the quality of the reconstructed images. The experimental results show that the proposed algorithm outperforms existing learning-based algorithms in terms of both visual perception and objective evaluation.
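The iterative back-projection (IBP) stage that produces the baseline reconstructions can be sketched as follows. This is a minimal sketch under an assumed degradation model (average-pooling decimation, nearest-neighbour zoom); the K-SVD dictionary training and the non-local similarity regularization from the abstract are omitted.

```python
import numpy as np

def downsample(img, s):
    # assumed degradation model: s*s average-pooling decimation
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s):
    # nearest-neighbour zoom by factor s
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def iterative_back_projection(lr, s=2, n_iter=10, step=1.0):
    """IBP: refine an HR estimate until its simulated LR matches the observation."""
    hr = upsample(lr, s)                       # initial HR guess
    # slight smoothing so the loop starts from an inconsistent estimate
    hr = (hr + np.roll(hr, 1, 0) + np.roll(hr, 1, 1)) / 3.0
    for _ in range(n_iter):
        err = lr - downsample(hr, s)           # reconstruction error in LR space
        hr = hr + step * upsample(err, s)      # back-project the error into HR space
    return hr
```

In the paper's pipeline, the difference between the true high-resolution image and this IBP output supplies the training patches for the high-resolution dictionary, so the dictionary only has to model the residual detail that back-projection cannot recover.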